This course teaches you to build deep neural network models for different business domains with TensorFlow, one of the most widely used machine learning libraries, developed by the Google AI team. Both the concepts of deep learning and its applications are covered in this course. We will also focus on Keras, as well as advanced topics such as transfer learning, autoencoders, and face recognition (including the VGG-Face, Google FaceNet, OpenFace, and Facebook DeepFace models). This course appeals to anyone interested in Machine Learning, Data Science, and AI. You don't need to have attended any ML course before.
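To make the transfer-learning topic concrete, here is a minimal Keras sketch, assuming TensorFlow's bundled Keras; the VGG16 base, image size, and 5-class head are illustrative assumptions, not this course's actual code:

```python
# A minimal transfer-learning sketch: reuse VGG16's pretrained features
# and train only a new classification head. The 5-class head and image
# size are illustrative assumptions, not the course's actual code.
from tensorflow import keras

base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained convolutional base

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(5, activation="softmax"),  # new head for 5 classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # with your own data
```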
Welcome to Cutting-Edge AI! This is technically Deep Learning in Python part 11 of my deep learning series, and my 3rd reinforcement learning course. Deep Reinforcement Learning is the combination of 2 topics: Reinforcement Learning and Deep Learning (Neural Networks). While both of these have been around for quite some time, it's only recently that Deep Learning has really taken off, and along with it, Reinforcement Learning. The maturation of deep learning has propelled advances in reinforcement learning, which has been around since the 1980s, although some aspects of it, such as the Bellman equation, have been around for much longer. Recently, these advances have allowed us to showcase just how powerful reinforcement learning can be. We've seen how AlphaZero can master the game of Go using only self-play. This is just a few years after the original AlphaGo beat a world champion in Go. We've seen real-world robots learn how to walk, and even recover after being kicked over, despite only being trained in simulation. Simulation is nice because it doesn't require actual hardware, which is expensive, and if your agent falls down, no real damage is done. We've seen real-world robots learn hand dexterity, which is no small feat. Walking is one thing, but that involves coarse movements; hand dexterity is complex - you have many degrees of freedom, and many of the forces involved are extremely subtle. Imagine using your foot to do something you usually do with your hand, and you immediately understand why this would be difficult. Last but not least - video games. Even just considering the past few months, we've seen some amazing developments: AIs are now beating professional players in CS:GO and Dota 2. So what makes this course different from the first two? Now that we know deep learning works with reinforcement learning, the question becomes: how do we improve these algorithms? This course is going to show you a few different ways, including the powerful A2C (Advantage Actor-Critic) algorithm, the DDPG (Deep Deterministic Policy Gradient) algorithm, and evolution strategies. Evolution strategies is a fresh take on reinforcement learning that throws away much of the old theory in favor of a more "black box" approach, inspired by biological evolution. What's also great about this new course is the variety of environments we get to look at. First, we're going to look at the classic Atari environments. These are important because they show that reinforcement learning agents can learn from images alone. Second, we're going to look at MuJoCo, which is a physics simulator. This is the first step toward building a robot that can navigate the real world and understand physics - we first have to show it can work with simulated physics. Finally, we're going to look at Flappy Bird, everyone's favorite mobile game just a few years ago. Thanks for reading, and I'll see you in class! "If you can't implement it, you don't understand it." Or as the great physicist Richard Feynman said: "What I cannot create, I do not understand." My courses are the ONLY courses where you will learn how to implement machine learning algorithms from scratch. Other courses will teach you how to plug your data into a library, but do you really need help with 3 lines of code? After doing the same thing with 10 datasets, you realize you didn't learn 10 things. You learned 1 thing, and just repeated the same 3 lines of code 10 times...
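To make the environments described above (Atari, MuJoCo, Flappy Bird) concrete, here is a minimal sketch of the classic OpenAI Gym interaction loop with a random, untrained agent; "Breakout-v0" assumes the Atari extras are installed, and the 4-tuple step() return is the classic Gym interface:

```python
# A minimal sketch of the classic OpenAI Gym interaction loop with a
# random (untrained) agent. "Breakout-v0" assumes the Atari extras are
# installed; the 4-tuple step() API is the classic Gym interface.
import gym

env = gym.make("Breakout-v0")
observation = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()  # random policy, no learning yet
    observation, reward, done, info = env.step(action)
    total_reward += reward
print("Episode reward:", total_reward)
```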
Suggested prerequisites:
- Calculus
- Probability
- Object-oriented programming
- Python coding: if/else, loops, lists, dicts, sets
- Numpy coding: matrix and vector operations
- Linear regression
- Gradient descent
- Know how to build a convolutional neural network (CNN) in TensorFlow
- Markov Decision Processes (MDPs)
WHAT ORDER SHOULD I TAKE YOUR COURSES IN?: Check out the lecture "Machine Learning and AI Prerequisite Roadmap" (available in the FAQ of any of my courses, including the free Numpy course)
Learn how we implemented Deep Learning Object Detection models on the Raspberry Pi and accelerated them with the Intel Movidius Neural Compute Stick. When we first got started in Deep Learning, particularly in Computer Vision, we were really excited about the possibilities of this technology to help people. The only problem is that image classification and object detection run just fine on our expensive, power-consuming and bulky Deep Learning machines. However, not everyone can afford or implement AI for their practical applications. This is when we went searching for an affordable, compact, less power-hungry alternative. Generally, if we want to shrink our IoT and automation projects, we often look to the Raspberry Pi, which is a versatile computing solution for numerous problems. This made us ponder how we could port our deep learning models to this compact computing unit. Not only that, but how could we run them at close to real-time? Amongst the possible solutions, we arrived at using the Raspberry Pi in conjunction with an AI Accelerator USB stick made by Intel to boost our object detection frame rate. However, it was not so simple to get it up and running. Following the documentation, we ran into bug after bug, which became tedious. After endless posts on forums, tutorials and blogs, we have documented a seamless guide in the form of this course, which will show you, step by step, how to implement your own Deep Learning Object Detection models on video and webcam without all the wasteful debugging. Essentially, we've structured this training to reduce debugging, speed up your time to market and get you results sooner. In this course, here are some of the things that you will learn:
- Getting started with the Raspberry Pi, even if you are a beginner
- Deep Learning basics
- Object detection models - pros and cons of each CNN
- Setup and install the Movidius Neural Compute Stick (NCS) SDK (currently, OpenVINO is available for Raspbian, so the NCS2 is already compatible with the Raspberry Pi, but this course is mainly for the Movidius NCS version 1)
- Run Yolo and MobileNet SSD object detection models on recorded or live video
You also get helpful bonuses:
- OpenCV CPU inference (see the sketch below)
- Introduction to custom model training
Personal help within the course: I donate my time to regularly hold office hours with students. During the office hours you can ask me any question you want, and I will do my best to help you. The office hours are free; I don't try to sell anything. Students can start discussions and message me with private questions. I answer 99% of questions within 24 hours. I love helping students who take my courses and I look forward to helping you. I regularly update this course to reflect the current state of the field. Get a Career Boost with a Certificate of Completion: upon completing 100% of this course, you will be emailed a certificate of completion. You can show it as proof of your expertise and that you have completed a certain number of hours of instruction. If you want to get a job or freelancing clients in Artificial Intelligence, a certificate from this course can help you appear as a stronger candidate. Money-Back Guarantee: the course comes with an unconditional, Udemy-backed, 30-day money-back guarantee. This is not just a guarantee, it's my personal promise to you that I will go out of my way to help you succeed, just like I've done for thousands of my other students. Let me help you get fast results.
Enroll now by clicking the button, and let us show you how to develop accelerated AI on the Raspberry Pi.
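Before you do, here is a taste of the OpenCV CPU inference bonus mentioned above - a minimal sketch of MobileNet-SSD inference with OpenCV's dnn module; the model file names and the test image are hypothetical placeholders, not the course's exact assets:

```python
# A minimal sketch of OpenCV CPU inference with a Caffe MobileNet-SSD
# model. File names below are hypothetical placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
frame = cv2.imread("test.jpg")
h, w = frame.shape[:2]
# MobileNet-SSD expects 300x300 inputs, scaled and mean-subtracted.
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()  # shape: (1, 1, N, 7)
for i in range(detections.shape[2]):
    confidence = float(detections[0, 0, i, 2])
    if confidence > 0.5:  # keep confident detections only
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        x1, y1, x2, y2 = box.astype(int)
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                      (0, 255, 0), 2)
cv2.imwrite("out.jpg", frame)
```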
Latest update: instead of SSD, I show you how to use RetinaNet, which is better and more modern. I show you both how to use a pretrained model and how to train one yourself with a custom dataset on Google Colab. This is one of the most exciting courses I've done and it really shows how fast and how far deep learning has come over the years. When I first started my deep learning series, I didn't ever consider that I'd make two courses on convolutional neural networks. I think what you'll find is that this course is so entirely different from the previous one, you will be impressed at just how much material we have to cover. Let me give you a quick rundown of what this course is all about: We're going to bridge the gap between the basic CNN architecture you already know and love and modern, novel architectures such as VGG, ResNet, and Inception (named after the movie, which by the way, is also great!). We're going to apply these to images of blood cells, and create a system that is a better medical expert than either you or I. This brings up a fascinating idea: that the doctors of the future are not humans, but robots. In this course, you'll see how we can turn a CNN into an object detection system that not only classifies images but can locate each object in an image and predict its label. You can imagine that such a task is a basic prerequisite for self-driving vehicles. (They must be able to detect cars, pedestrians, bicycles, traffic lights, etc. in real-time.) We'll be looking at a state-of-the-art algorithm called SSD which is both faster and more accurate than its predecessors. Another very popular computer vision task that makes use of CNNs is called neural style transfer. This is where you take one image called the content image, and another image called the style image, and you combine these to make an entirely new image that is as if you hired a painter to paint the content of the first image with the style of the other. Unlike a human painter, this can be done in a matter of seconds. I will also introduce you to the now-famous GAN architecture (Generative Adversarial Networks), where you will learn some of the technology behind how neural networks are used to generate state-of-the-art, photo-realistic images. Currently, we also implement object localization, which is an essential first step toward implementing a full object detection system. I hope you're excited to learn about these advanced applications of CNNs. I'll see you in class! AWESOME FACTS: One of the major themes of this course is that we're moving away from the CNN itself, to systems involving CNNs. Instead of focusing on the detailed inner workings of CNNs (which we've already done), we'll focus on high-level building blocks. The result? Almost zero math. Another result? No complicated low-level code such as that written in Tensorflow, Theano, or PyTorch (although some optional exercises may contain them for the very advanced students). Most of the course will be in Keras, which means a lot of the tedious, repetitive stuff is written for you. "If you can't implement it, you don't understand it." Or as the great physicist Richard Feynman said: "What I cannot create, I do not understand." My courses are the ONLY courses where you will learn how to implement machine learning algorithms from scratch. Other courses will teach you how to plug your data into a library, but do you really need help with 3 lines of code? After doing the same thing with 10 datasets, you realize you didn't learn 10 things.
You learned 1 thing, and just repeated the same 3 lines of code 10 times... Suggested prerequisites:
- Know how to build, train, and use a CNN using some library (preferably in Python)
- Understand basic theoretical concepts behind convolution and neural networks
- Decent Python coding skills, preferably in data science and the Numpy stack
WHAT ORDER SHOULD I TAKE YOUR COURSES IN?: Check out the lecture "Machine Learning and AI Prerequisite Roadmap" (available in the FAQ of any of my courses, including the free Numpy course)
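To illustrate the pretrained-model workflow mentioned in the update above, here is a minimal Keras sketch; ResNet50 is used as a stand-in classifier (the course's RetinaNet detector has a different API), and "cat.jpg" is a hypothetical placeholder image:

```python
# A minimal sketch of using a pretrained Keras model. ResNet50 is a
# stand-in here, and "cat.jpg" is a hypothetical placeholder image.
import numpy as np
from tensorflow import keras
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)

model = ResNet50(weights="imagenet")
img = keras.preprocessing.image.load_img("cat.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(
    keras.preprocessing.image.img_to_array(img), axis=0))
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # [(id, class name, probability)]
```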
|| DATA SCIENCE ||
Data science continues to evolve as one of the most promising and in-demand career paths for skilled professionals. Today, successful data professionals understand that they must advance past the traditional skills of analyzing large amounts of data, data mining, and programming.
What Does a Data Scientist Do? In the past decade, data scientists have become necessary assets and are present in almost all organizations. These professionals are well-rounded, data-driven individuals with high-level technical skills who are capable of building complex quantitative algorithms to organize and synthesize large amounts of information used to answer questions and drive strategy in their organization. This is coupled with the experience in communication and leadership needed to deliver tangible results to various stakeholders across an organization or business. Data scientists need to be curious and results-oriented, with exceptional industry-specific knowledge and communication skills that allow them to explain highly technical results to their non-technical counterparts. They possess a strong quantitative background in statistics and linear algebra as well as programming knowledge with a focus on data warehousing, mining, and modeling to build and analyze algorithms. Glassdoor ranked data scientist the #1 Best Job in America in 2018 for the third year in a row. As increasing amounts of data become more accessible, large tech companies are no longer the only ones in need of data scientists. The growing demand for data science professionals across industries, big and small, is being challenged by a shortage of qualified candidates available to fill the open positions. The need for data scientists shows no sign of slowing down in the coming years. LinkedIn listed data scientist as one of the most promising jobs in 2017 and 2018, along with multiple data-science-related skills as the most in-demand by companies.
Where Do You Fit in Data Science? Data is everywhere and expansive. A variety of terms related to mining, cleaning, analyzing, and interpreting data are often used interchangeably, but they can actually involve different skill sets and complexity of data.
Data Scientist: Data scientists examine which questions need answering and where to find the related data. They have business acumen and analytical skills as well as the ability to mine, clean, and present data. Businesses use data scientists to source, manage, and analyze large amounts of unstructured data. Results are then synthesized and communicated to key stakeholders to drive strategic decision-making in the organization. Skills needed: programming (SAS, R, Python), statistical and mathematical skills, storytelling and data visualization, Hadoop, SQL, machine learning.
Data Analyst: Data analysts bridge the gap between data scientists and business analysts. They are provided with the questions that need answering from an organization and then organize and analyze data to find results that align with high-level business strategy. Data analysts are responsible for translating technical analysis into qualitative action items and effectively communicating their findings to diverse stakeholders. Skills needed: programming (SAS, R, Python), statistical and mathematical skills, data wrangling, data visualization.
Data Engineer: Data engineers manage exponential amounts of rapidly changing data.
They focus on the development, deployment, management, and optimization of data pipelines and infrastructure to transform and transfer data to data scientists for querying. Skills needed: programming languages (Java, Scala), NoSQL databases (MongoDB, Cassandra DB), frameworks (Apache Hadoop).
Data Science Career Outlook and Salary Opportunities: Data science professionals are rewarded for their highly technical skill set with competitive salaries and great job opportunities at big and small companies in most industries. With over 4,500 open positions listed on Glassdoor, data science professionals with the appropriate experience and education have the opportunity to make their mark in some of the most forward-thinking companies in the world.
Data Science is primarily used to make decisions and predictions, making use of predictive causal analytics, prescriptive analytics (predictive plus decision science) and machine learning.
Predictive causal analytics - If you want a model which can predict the possibilities of a particular event in the future, you need to apply predictive causal analytics. Say you are providing money on credit; then the probability of customers making future credit payments on time is a matter of concern for you. Here, you can build a model which performs predictive analytics on the payment history of the customer to predict whether future payments will be on time or not.
Prescriptive analytics - If you want a model which has the intelligence to take its own decisions and the ability to modify them with dynamic parameters, you certainly need prescriptive analytics. This relatively new field is all about providing advice: it not only predicts but suggests a range of prescribed actions and associated outcomes. The best example of this is Google's self-driving car. The data gathered by vehicles can be used to train self-driving cars. You can run algorithms on this data to bring intelligence to it. This will enable your car to take decisions like when to turn, which path to take, and when to slow down or speed up.
Machine learning for making predictions - If you have transactional data of a finance company and need to build a model to determine the future trend, then machine learning algorithms are the best bet. This falls under the paradigm of supervised learning. It is called supervised because you already have the data on which you can train your machines. For example, a fraud detection model can be trained using a historical record of fraudulent purchases.
Machine learning for pattern discovery - If you don't have the parameters on which you can make predictions, then you need to find the hidden patterns within the dataset to be able to make meaningful predictions. This is unsupervised learning, as you don't have any predefined labels for grouping. The most common algorithm used for pattern discovery is clustering. Let's say you are working at a telephone company and you need to establish a network by putting towers in a region. Then, you can use the clustering technique to find the tower locations which will ensure that all users receive optimum signal strength.
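A minimal sketch of that clustering idea with scikit-learn; the user coordinates are toy random data, not a real dataset:

```python
# A minimal sketch of pattern discovery with clustering: place towers at
# the centers of user clusters. Coordinates are toy random data.
import numpy as np
from sklearn.cluster import KMeans

user_locations = np.random.rand(500, 2) * 100  # toy (x, y) positions
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(user_locations)
print("Suggested tower locations:\n", kmeans.cluster_centers_)
```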
|| DEEP LEARNING ||
Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of attention lately, and for good reason: it's achieving results that were not possible before. In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained using a large set of labeled data and neural network architectures that contain many layers.
How does deep learning attain such impressive results? In a word, accuracy. Deep learning achieves recognition accuracy at higher levels than ever before. This helps consumer electronics meet user expectations, and it is crucial for safety-critical applications like driverless cars. Recent advances in deep learning have improved to the point where deep learning outperforms humans in some tasks, like classifying objects in images. While deep learning was first theorized in the 1980s, there are two main reasons it has only recently become useful: Deep learning requires large amounts of labeled data. For example, driverless car development requires millions of images and thousands of hours of video. Deep learning requires substantial computing power. High-performance GPUs have a parallel architecture that is efficient for deep learning. When combined with clusters or cloud computing, this enables development teams to reduce training time for a deep learning network from weeks to hours or less.
Examples of Deep Learning at Work: Deep learning applications are used in industries from automated driving to medical devices. Automated driving: automotive researchers are using deep learning to automatically detect objects such as stop signs and traffic lights. In addition, deep learning is used to detect pedestrians, which helps decrease accidents. Aerospace and defense: deep learning is used to identify objects from satellites, to locate areas of interest and identify safe or unsafe zones for troops. Medical research: cancer researchers are using deep learning to automatically detect cancer cells. Teams at UCLA built an advanced microscope that yields a high-dimensional data set used to train a deep learning application to accurately identify cancer cells. Industrial automation: deep learning is helping to improve worker safety around heavy machinery by automatically detecting when people or objects are within an unsafe distance of machines. Electronics: deep learning is being used in automated hearing and speech translation. For example, home assistance devices that respond to your voice and know your preferences are powered by deep learning applications.
What's the Difference Between Machine Learning and Deep Learning? Deep learning is a specialized form of machine learning. A machine learning workflow starts with relevant features being manually extracted from images. The features are then used to create a model that categorizes the objects in the image. With a deep learning workflow, relevant features are automatically extracted from images. In addition, deep learning performs "end-to-end learning" - where a network is given raw data and a task to perform, such as classification, and it learns how to do this automatically. Another key difference is that deep learning algorithms scale with data, whereas shallow learning converges.
Shallow learning refers to machine learning methods that plateau at a certain level of performance as you add more examples and training data to the network. A key advantage of deep learning networks is that they often continue to improve as the size of your data increases.
|| MACHINE LEARNING ||
What is the definition of machine learning? Machine-learning algorithms use statistics to find patterns in massive amounts of data. And data, here, encompasses a lot of things - numbers, words, images, clicks, what have you. If it can be digitally stored, it can be fed into a machine-learning algorithm. Machine learning is the process that powers many of the services we use today - recommendation systems like those on Netflix, YouTube, and Spotify; search engines like Google and Baidu; social-media feeds like Facebook and Twitter; voice assistants like Siri and Alexa. The list goes on. In all of these instances, each platform is collecting as much data about you as possible - what genres you like watching, what links you are clicking, which statuses you are reacting to - and using machine learning to make a highly educated guess about what you might want next. Or, in the case of a voice assistant, about which words match best with the funny sounds coming out of your mouth.
WHY IS MACHINE LEARNING SO SUCCESSFUL? While machine learning is not a new technique, interest in the field has exploded in recent years. This resurgence comes on the back of a series of breakthroughs, with deep learning setting new records for accuracy in areas such as speech and language recognition, and computer vision. What has made these successes possible are primarily two factors. One is the vast quantities of images, speech, video and text accessible to researchers looking to train machine-learning systems. But even more important is the availability of vast amounts of parallel-processing power, courtesy of modern graphics processing units (GPUs), which can be linked together into clusters to form machine-learning powerhouses. Today anyone with an internet connection can use these clusters to train machine-learning models, via cloud services provided by firms like Amazon, Google and Microsoft. As the use of machine learning has taken off, companies are now creating specialized hardware tailored to running and training machine-learning models. An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained. These chips are used not just to train models for Google DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine-learning models using Google's TensorFlow Research Cloud. The second generation of these chips was unveiled at Google's I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of top-end GPUs, and the recently announced third-generation TPUs able to accelerate training and inference even further. As hardware becomes increasingly specialized and machine-learning software frameworks are refined, it's becoming increasingly common for ML tasks to be carried out on consumer-grade phones and computers, rather than in cloud datacenters.
In the summer of 2018, Google took a step towards offering the same quality of automated translation on phones that are offline as is available online, by rolling out local neural machine translation for 59 languages to the Google Translate app for iOS and Android.
|| NATURAL LANGUAGE PROCESSING ||
Large volumes of textual data: Natural language processing helps computers communicate with humans in their own language and scales other language-related tasks. For example, NLP makes it possible for computers to read text, hear speech, interpret it, measure sentiment and determine which parts are important. Today's machines can analyze more language-based data than humans, without fatigue and in a consistent, unbiased way. Considering the staggering amount of unstructured data that's generated every day, from medical records to social media, automation will be critical to fully analyze text and speech data efficiently.
Structuring a highly unstructured data source: Human language is astoundingly complex and diverse. We express ourselves in infinite ways, both verbally and in writing. Not only are there hundreds of languages and dialects, but within each language is a unique set of grammar and syntax rules, terms and slang. When we write, we often misspell or abbreviate words, or omit punctuation. When we speak, we have regional accents, and we mumble, stutter and borrow terms from other languages. While supervised and unsupervised learning, and specifically deep learning, are now widely used for modeling human language, there's also a need for syntactic and semantic understanding and domain expertise that are not necessarily present in these machine learning approaches. NLP is important because it helps resolve ambiguity in language and adds useful numeric structure to the data for many downstream applications, such as speech recognition or text analytics.
How does NLP work? Breaking down the elemental pieces of language: Natural language processing includes many different techniques for interpreting human language, ranging from statistical and machine learning methods to rules-based and algorithmic approaches. We need a broad array of approaches because text- and voice-based data varies widely, as do the practical applications. Basic NLP tasks include tokenization and parsing, lemmatization/stemming, part-of-speech tagging, language detection and identification of semantic relationships. If you ever diagrammed sentences in grade school, you've done these tasks manually before (a minimal code sketch follows after the list below). In general terms, NLP tasks break down language into shorter, elemental pieces, try to understand relationships between the pieces and explore how the pieces work together to create meaning. These underlying tasks are often used in higher-level NLP capabilities, such as:
- Content categorization: a linguistic-based document summary, including search and indexing, content alerts and duplication detection.
- Topic discovery and modeling: accurately capture the meaning and themes in text collections, and apply advanced analytics to text, like optimization and forecasting.
- Contextual extraction: automatically pull structured information from text-based sources.
- Sentiment analysis: identifying the mood or subjective opinions within large amounts of text, including average sentiment and opinion mining.
- Speech-to-text and text-to-speech conversion: transforming voice commands into written text, and vice versa.
- Document summarization: automatically generating synopses of large bodies of text.
- Machine translation: automatic translation of text or speech from one language to another.
In all these cases, the overarching goal is to take raw language input and use linguistics and algorithms to transform or enrich the text in such a way that it delivers greater value.
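To make the basic tasks above concrete, here is a minimal sketch using spaCy, assuming the library and its small English model are installed:

```python
# Tokenization, lemmatization and part-of-speech tagging with spaCy.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The cars were driving faster than the law allowed.")
for token in doc:
    print(token.text, token.lemma_, token.pos_)  # e.g. "cars" -> "car", NOUN
```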
|| R LANGUAGE ||
What is R? R is a programming language developed by Ross Ihaka and Robert Gentleman in 1993. R possesses an extensive catalog of statistical and graphical methods. It includes machine learning algorithms, linear regression, time series, and statistical inference, to name a few. Most of the R libraries are written in R, but for heavy computational tasks, C, C++ and Fortran code is preferred. R is not only trusted by academia; many large companies also use the R programming language, including Uber, Google, Airbnb, Facebook and so on. Data analysis with R is done in a series of steps: programming, transforming, discovering, modeling and communicating the results.
- Program: R is a clear and accessible programming tool
- Transform: R is made up of a collection of libraries designed specifically for data science
- Discover: investigate the data, refine your hypotheses and analyze them
- Model: R provides a wide array of tools to capture the right model for your data
- Communicate: integrate code, graphs, and outputs into a report with R Markdown or build Shiny apps to share with the world
What is R used for?
- Statistical inference
- Data analysis
- Machine learning algorithms
- R packages
The primary uses of R are, and always will be, statistics, visualization, and machine learning. Looking at which R packages receive the most questions on Stack Overflow, most of the top 10 relate to the workflow of a data scientist: data preparation and communicating results.
*** As seen on Kickstarter *** Artificial intelligence is growing exponentially. There is no doubt about that. Self-driving cars are clocking up millions of miles, IBM Watson is diagnosing patients better than armies of doctors, and Google DeepMind's AlphaGo beat the world champion at Go - a game where intuition plays a key role. But the further AI advances, the more complex the problems it needs to solve become. Only Deep Learning can solve such complex problems, and that's why it's at the heart of Artificial Intelligence.
--- Why Deep Learning A-Z? ---
Here are five reasons we think Deep Learning A-Z™ really is different, and stands out from the crowd of other training programs out there:
1. ROBUST STRUCTURE The first and most important thing we focused on is giving the course a robust structure. Deep Learning is very broad and complex, and to navigate this maze you need a clear and global vision of it. That's why we grouped the tutorials into two volumes, representing the two fundamental branches of Deep Learning: Supervised Deep Learning and Unsupervised Deep Learning. With each volume focusing on three distinct algorithms, we found that this is the best structure for mastering Deep Learning.
2. INTUITION TUTORIALS So many courses and books just bombard you with the theory, and math, and coding... But they forget to explain, perhaps, the most important part: why you are doing what you are doing. And that's how this course is so different. We focus on developing an intuitive *feel* for the concepts behind Deep Learning algorithms. With our intuition tutorials you will be confident that you understand all the techniques on an instinctive level. And once you proceed to the hands-on coding exercises you will see for yourself how much more meaningful your experience will be. This is a game-changer.
3. EXCITING PROJECTS Are you tired of courses based on over-used, outdated data sets? Yes? Well then you're in for a treat. Inside this class we will work on real-world datasets, to solve real-world business problems. (Definitely not the boring iris or digit classification datasets that we see in every course.) In this course we will solve six real-world challenges:
- Artificial Neural Networks to solve a Customer Churn problem
- Convolutional Neural Networks for Image Recognition
- Recurrent Neural Networks to predict Stock Prices
- Self-Organizing Maps to investigate Fraud
- Boltzmann Machines to create a Recommender System
- Stacked Autoencoders* to take on the challenge of the Netflix $1 Million prize
*Stacked Autoencoders is a brand new technique in Deep Learning which didn't even exist a couple of years ago. We haven't seen this method explained anywhere else in sufficient depth.
4. HANDS-ON CODING In Deep Learning A-Z™ we code together with you. Every practical tutorial starts with a blank page and we write up the code from scratch. This way you can follow along and understand exactly how the code comes together and what each line means. In addition, we purposefully structure the code in such a way that you can download it and apply it in your own projects. Moreover, we explain step-by-step where and how to modify the code to insert YOUR dataset, to tailor the algorithm to your needs, and to get the output that you are after. This is a course which naturally extends into your career.
5. IN-COURSE SUPPORT Have you ever taken a course or read a book where you have questions but cannot reach the author? Well, this course is different.
We are fully committed to making this the most disruptive and powerful Deep Learning course on the planet. With that comes a responsibility to constantly be there when you need our help. In fact, since we physically also need to eat and sleep, we have put together a team of professional Data Scientists to help us out. Whenever you ask a question you will get a response from us within 48 hours maximum. No matter how complex your query, we will be there. The bottom line is we want you to succeed.
--- The Tools ---
TensorFlow and PyTorch are the two most popular open-source libraries for Deep Learning. In this course you will learn both! TensorFlow was developed by Google and is used in their speech recognition system, in the new Google Photos product, Gmail, Google Search and much more. Companies using TensorFlow include Airbnb, Airbus, eBay, Intel, Uber and dozens more. PyTorch is just as powerful and is being developed by researchers at Nvidia and leading universities: Stanford, Oxford, ParisTech. Companies using PyTorch include Twitter, Salesforce and Facebook. So which is better and for what? Well, in this course you will have an opportunity to work with both and understand when TensorFlow is better and when PyTorch is the way to go. Throughout the tutorials we compare the two and give you tips and ideas on which could work best in certain circumstances. The interesting thing is that both these libraries are barely over 1 year old. That's what we mean when we say that in this course we teach you the most cutting-edge Deep Learning models and techniques.
--- More Tools ---
Theano is another open source deep learning library. It's very similar to TensorFlow in its functionality, but nevertheless we will still cover it. Keras is an incredible library for implementing Deep Learning models. It acts as a wrapper for Theano and TensorFlow. Thanks to Keras we can create powerful and complex Deep Learning models with only a few lines of code (see the sketch below). This is what will allow you to have a global vision of what you are creating. Everything you make will look so clear and structured thanks to this library that you will really get the intuition and understanding of what you are doing.
--- Even More Tools ---
Scikit-learn is the most practical Machine Learning library. We will mainly use it: to evaluate the performance of our models with the most relevant technique, k-Fold Cross Validation; to improve our models with effective Parameter Tuning; and to preprocess our data, so that our models can learn in the best conditions. And of course, we have to mention the usual suspects. This whole course is based on Python and in every single section you will be getting hours and hours of invaluable hands-on practical coding experience. Plus, throughout the course we will be using Numpy to do high-performance computations and manipulate high-dimensional arrays, Matplotlib to plot insightful charts, and Pandas to import and manipulate datasets most efficiently.
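To show what "a few lines of code" means in practice, here is a minimal Keras sketch on a hypothetical toy problem (not one of the course's six challenges); TensorFlow's bundled Keras is assumed:

```python
# A minimal sketch of the "few lines of code" style Keras enables.
# Toy data, not one of the course's challenges.
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 10)          # 1000 samples, 10 features
y = (X.sum(axis=1) > 5).astype(int)   # toy binary target

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```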
--- Who Is This Course For? ---
As you can see, there are lots of different tools in the space of Deep Learning, and in this course we make sure to show you the most important and most progressive ones, so that when you're done with Deep Learning A-Z™ your skills are on the cutting edge of today's technology. If you are just starting out in Deep Learning, then you will find this course extremely useful. Deep Learning A-Z™ is structured around special coding blueprint approaches, meaning that you won't get bogged down in unnecessary programming or mathematical complexities and instead you will be applying Deep Learning techniques from very early on in the course. You will build your knowledge from the ground up and you will see how, with every tutorial, you get more and more confident. If you already have experience with Deep Learning, you will find this course refreshing, inspiring and very practical. Inside Deep Learning A-Z™ you will master some of the most cutting-edge Deep Learning algorithms and techniques (some of which didn't even exist a year ago) and through this course you will gain an immense amount of valuable hands-on experience with real-world business challenges. Plus, inside you will find inspiration to explore new Deep Learning skills and applications.
--- Real-World Case Studies ---
Mastering Deep Learning is not just about knowing the intuition and tools, it's also about being able to apply these models to real-world scenarios and derive actual measurable results for the business or project. That's why in this course we are introducing six exciting challenges:
#1 Churn Modelling Problem In this part you will be solving a data analytics challenge for a bank. You will be given a dataset with a large sample of the bank's customers. To make this dataset, the bank gathered information such as customer id, credit score, gender, age, tenure, balance, whether the customer is active, has a credit card, etc. During a period of 6 months, the bank observed whether these customers left or stayed. Your goal is to make an Artificial Neural Network that can predict, based on the geo-demographical and transactional information given above, whether any individual customer will leave the bank or stay (customer churn). Besides, you are asked to rank all the customers of the bank based on their probability of leaving. To do that, you will need to use the right Deep Learning model, one that is based on a probabilistic approach. If you succeed in this project, you will create significant added value for the bank. By applying your Deep Learning model, the bank may significantly reduce customer churn.
#2 Image Recognition In this part, you will create a Convolutional Neural Network that is able to detect various objects in images. We will implement this Deep Learning model to recognize a cat or a dog in a set of pictures. However, this model can be reused to detect anything else, and we will show you how to do it - by simply changing the pictures in the input folder. For example, you will be able to train the same model on a set of brain images to detect whether they contain a tumor. But if you want to keep it fitted to cats and dogs, then you will literally be able to take a picture of your cat or your dog, and your model will predict which pet you have. We even tested it out on Hadelin's dog!
#3 Stock Price Prediction In this part, you will create one of the most powerful Deep Learning models. We will even go as far as saying that you will create the Deep Learning model closest to "Artificial Intelligence". Why is that? Because this model will have long-term memory, just like us, humans. The branch of Deep Learning which facilitates this is Recurrent Neural Networks. Classic RNNs have short memory, and were neither popular nor powerful for this exact reason.
But a recent major improvement in Recurrent Neural Networks gave rise to the popularity of LSTMs (Long Short-Term Memory RNNs), which have completely changed the playing field. We are extremely excited to include these cutting-edge deep learning methods in our course! In this part you will learn how to implement this ultra-powerful model, and we will take on the challenge of using it to predict the real Google stock price. A similar challenge has already been faced by researchers at Stanford University, and we will aim to do at least as well as they did.
#4 Fraud Detection According to a recent report published by Markets & Markets, the Fraud Detection and Prevention Market is going to be worth $33.19 Billion USD by 2021. This is a huge industry and the demand for advanced Deep Learning skills is only going to grow. That's why we have included this case study in the course. This is the first part of Volume 2 - Unsupervised Deep Learning Models. The business challenge here is about detecting fraud in credit card applications. You will be creating a Deep Learning model for a bank, and you are given a dataset that contains information on customers applying for an advanced credit card. This is the data that customers provided when filling in the application form. Your task is to detect potential fraud within these applications. That means that by the end of the challenge, you will literally come up with an explicit list of customers who potentially cheated on their applications.
#5 & 6 Recommender Systems From Amazon product suggestions to Netflix movie recommendations - good recommender systems are very valuable in today's world. And specialists who can create them are some of the top-paid Data Scientists on the planet. We will work on a dataset that has exactly the same features as the Netflix dataset: plenty of movies, and thousands of users who have rated the movies they watched. The ratings go from 1 to 5, exactly like in the Netflix dataset, which makes the Recommender System more complex to build than if the ratings were simply "Liked" or "Not Liked". Your final Recommender System will be able to predict the ratings of the movies the customers didn't watch. Accordingly, by ranking the predictions from 5 down to 1, your Deep Learning model will be able to recommend which movies each user should watch. Creating such a powerful Recommender System is quite a challenge, so we will give ourselves two shots, meaning we will build it with two different Deep Learning models. Our first model will be Deep Belief Networks, complex Boltzmann Machines that will be covered in Part 5. Then our second model will be with the powerful AutoEncoders, my personal favorites. You will appreciate the contrast between their simplicity and what they are capable of. And you will even be able to apply it to yourself or your friends. The list of movies will be explicit, so you will simply need to rate the movies you already watched, input your ratings into the dataset, execute your model, and voila! The Recommender System will tell you exactly which movies you would love one night if you are out of ideas of what to watch on Netflix!
--- Summary ---
In conclusion, this is an exciting training program filled with intuition tutorials, practical exercises and real-world case studies. We are super enthusiastic about Deep Learning and hope to see you inside the class! Kirill & Hadelin
Deep learning is the next big thing. It's a part of machine learning, and its favorable results in applications with huge and complex data are remarkable. The R programming language is very popular among data miners and statisticians. Deep learning refers to artificial neural networks that are composed of many layers. Deep learning is a powerful set of techniques for extracting accurate information from raw data. This comprehensive 2-in-1 course will help you explore and create intelligent systems using deep learning techniques. You'll understand the usage of multiple applications like Natural Language Processing, bioinformatics, recommendation engines, etc. where deep learning models are implemented. You'll get hands-on with various deep learning scenarios and get mind-blowing insights from your data. You'll be able to master the intricacies of R deep learning packages such as TensorFlow. You'll also learn deep learning in different domains using practical examples from text, image, and speech. Contents and Overview: This training program includes 2 complete courses, carefully chosen to give you the most comprehensive training possible. The first course, Deep Learning with R, covers videos that will teach you how to leverage deep learning to make sense of your raw data by exploring various hidden layers of data. Each video in this course provides a clear and concise introduction to a key topic, one or more examples of implementations of these concepts in R, and guidance for additional learning, exploration, and application of the skills learned therein. You'll start by understanding the basics of deep learning and artificial neural networks and move on to exploring advanced ANNs and RNNs. You'll dive deep into convolutional neural networks and unsupervised learning. You'll also learn about the applications of deep learning in various fields and understand the practical implementations of scalability, HPC and feature engineering. Finally, starting out at a basic level, you'll learn how to develop and implement deep learning algorithms using R in real-world scenarios. The second course, R Deep Learning Solutions, covers powerful, independent videos to build deep learning models in different application areas using R libraries. It will help you resolve problems during the execution of different tasks in deep learning, neural networks, and advanced machine learning techniques. You'll start with different packages in deep learning, neural networks, and structures. You'll also encounter applications in text mining and processing, along with a comparison between CPU and GPU performance. Finally, you'll explore complex deep learning algorithms and various deep learning packages and libraries in R. By the end of this training program you'll be able to develop and implement deep learning algorithms using R in real-world scenarios, and you'll have an understanding of different deep learning packages so you have the most appropriate solutions for your problems. About the Authors: Vincenzo Lomonaco is a Deep Learning PhD student at the University of Bologna and founder of ContinuousAI.com, an open source project aiming to connect people and reorganize resources in the context of Continuous Learning and AI. He is also the PhD students' representative at the Department of Computer Science and Engineering (DISI) and a teaching assistant for the courses "Machine Learning" and "Computer Architectures" in the same department.
Previously, he was a Machine Learning software engineer at IDL in-line Devices and a Master's student at the University of Bologna, where he graduated cum laude in 2015 with the dissertation "Deep Learning for Computer Vision: A comparison between CNNs and HTMs on object recognition tasks". Dr. PKS Prakash is a data scientist and an author. He has spent the last 12 years developing many data science solutions to solve problems for leading companies in the healthcare, manufacturing, pharmaceutical, and e-commerce domains. He currently works as a data science manager at ZS Associates. Prakash has a PhD in Industrial and Systems Engineering from the University of Wisconsin-Madison, US. He gained his second PhD in Engineering at the University of Warwick, UK. He has a master's degree from the University of Wisconsin-Madison, US, and a bachelor's degree from the National Institute of Foundry and Forge Technology (NIFFT), India. He is co-founder of Warwick Analytics, which is based on his PhD work at the University of Warwick, UK. Prakash has been published widely in the research areas of operational research and management, soft computing tools, and advanced algorithms in leading journals such as IEEE-Trans, EJOR, and IJPR, among others. He edited an issue on "Intelligent Approaches to Complex Systems" and contributed to books such as Evolutionary Computing in Advanced Manufacturing, published by Wiley, and Algorithms and Data Structures using R, published by Packt Publishing. Achyutuni Sri Krishna Rao is a data scientist, a civil engineer, and an author. He has spent the last four years developing many data science solutions to solve problems for leading companies in the healthcare, pharmaceutical, and manufacturing domains. He currently works as a data science consultant at ZS Associates. Sri Krishna's background is a master's in Enterprise Business Analytics and Machine Learning from the National University of Singapore, Singapore. He also has a bachelor's degree from the National Institute of Technology Warangal, India. Sri Krishna has been published widely in the research areas of civil engineering. He contributed to the book Algorithms and Data Structures using R, published by Packt Publishing.
Self-driving cars have rapidly become one of the most transformative technologies to emerge. Fuelled by Deep Learning algorithms, they are continuously driving our society forward and creating new opportunities in the mobility sector. Deep Learning jobs command some of the highest salaries in the development world. This is the first, and only, course which makes practical use of Deep Learning and applies it to building a self-driving car, one of the most disruptive technologies in the world today. Learn & master Deep Learning in this fun and exciting course with top instructor Rayan Slim. With over 28000 students, Rayan is a highly rated and experienced instructor who has followed a "learn by doing" style to create this amazing course. You'll go from beginner to Deep Learning expert, and your instructor will complete each task with you step by step on screen. By the end of the course, you will have built a fully functional self-driving car fuelled entirely by Deep Learning. This powerful simulation will impress even the most senior developers and ensure you have hands-on skills in neural networks that you can bring to any project or company. This course will show you how to:
- Use Computer Vision techniques via OpenCV to identify lane lines for a self-driving car (a minimal sketch of this technique follows below)
- Train a Perceptron-based Neural Network to classify between binary classes
- Train Convolutional Neural Networks to identify various traffic signs
- Train Deep Neural Networks to fit complex datasets
- Master Keras, a powerful Neural Network library written in Python
- Build and train a fully functional self-driving car to drive on its own!
No experience required. This course is designed to take students with no programming/mathematics experience to accomplished Deep Learning developers. This course also comes with all the source code and friendly support in the Q&A area.
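As a taste of the lane-line technique referenced above, here is a minimal classic OpenCV sketch; "road.jpg" is a hypothetical image, all thresholds are illustrative, and a full pipeline like the course's adds region-of-interest masking and line averaging:

```python
# A minimal lane-line sketch with OpenCV: grayscale, blur, Canny edges,
# then a probabilistic Hough transform. Parameters are illustrative.
import cv2
import numpy as np

img = cv2.imread("road.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blur, 50, 150)
lines = cv2.HoughLinesP(edges, 2, np.pi / 180, 100,
                        minLineLength=40, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)),
                 (0, 255, 0), 5)
cv2.imwrite("lanes.jpg", img)
```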
The primary objective of this course is to teach you the practical, hands-on skills you need to solve image classification problems - and in particular, multi-class classification. And all this we shall be doing without bringing in unnecessary math behind it all. In this course you will learn about the most widely used type of deep neural network (the convolutional neural network), as used by top companies all over the world like Facebook and Google. You will learn how to use Keras in your applications to solve problems and package your models, and build a REST API to serve your deep learning models (a minimal sketch of such an API follows below).
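A minimal sketch of the model-serving idea, assuming Flask and a Keras model saved as "model.h5" (both hypothetical; the course's API may differ):

```python
# A minimal sketch of serving a saved Keras model behind a REST API
# with Flask. "model.h5" and the /predict endpoint are hypothetical;
# error handling and input validation are omitted for brevity.
import numpy as np
from flask import Flask, request, jsonify
from tensorflow import keras

app = Flask(__name__)
model = keras.models.load_model("model.h5")

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"inputs": [[...feature values...]]}
    inputs = np.array(request.get_json()["inputs"])
    probs = model.predict(inputs).tolist()
    return jsonify({"predictions": probs})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```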
This course covers the general workflow of a deep learning project, implemented using PyTorch in Google Colab. At the end of the course, students will be proficient at using Google Colab as well as PyTorch in their own projects. Students will also learn about the theoretical foundations of various deep learning models and techniques, as well as how to implement them using PyTorch. Finally, the course ends by offering an overview of deep learning in general and how to think about problems in the field; students will gain a high-level understanding of the role deep learning plays in AI. You will:
- Learn how to utilize Google Colab as an online computing platform in deep learning projects, including running Python code, using a free GPU, and working with external files and folders
- Understand the general workflow of a deep learning project
- Examine the various APIs (datasets, modeling, training) PyTorch offers to facilitate deep learning
- Learn about the theoretical basis for various deep learning models such as convolutional networks or residual networks and what problems they address
- Gain an overview understanding of deep learning in the context of the artificial intelligence field and its best practices
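To preview that workflow, here is a minimal PyTorch sketch covering the dataset, modeling, and training APIs, on toy data with illustrative hyperparameters; the free Colab GPU is used automatically when available:

```python
# A minimal sketch of the PyTorch workflow: dataset -> model -> training
# loop, on toy data. Hyperparameters are illustrative only.
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

X = torch.randn(1000, 20)      # toy inputs: 1000 samples, 20 features
y = (X.sum(dim=1) > 0).long()  # toy binary labels
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                      nn.Linear(64, 2)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for xb, yb in loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
print("final batch loss:", loss.item())
```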